Characterization of count data distributions involving additivity and binomial subsampling
In this paper we characterize all the k-parameter families of count
distributions (satisfying mild conditions) that are closed under addition and
under binomial subsampling. Surprisingly, few families satisfy both properties,
and the resulting models consist of the kth-order univariate Hermite
distributions. Among these, we find the Poisson (k = 1) and the ordinary
Hermite (k = 2) distributions.
Comment: Published at http://dx.doi.org/10.3150/07-BEJ6021 in Bernoulli
(http://isi.cbs.nl/bernoulli/) by the International Statistical
Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm).
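The two closure properties can be checked numerically for the simplest (Poisson) member of the family — a Monte Carlo sketch only, not the paper's characterization: if X ~ Poisson(λ) and Y ~ Poisson(μ) independently, then X + Y ~ Poisson(λ + μ), and binomially subsampling X with retention probability p (keeping each counted item independently) yields Poisson(pλ).

```python
import random

def poisson(lam, rng):
    # Knuth's multiplicative algorithm for sampling Poisson(lam)
    L, k, p = pow(2.718281828459045, -lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def binomial_thin(x, p, rng):
    # Binomial subsampling: keep each of the x counted items with probability p
    return sum(1 for _ in range(x) if rng.random() < p)

rng = random.Random(0)
lam, mu, p, n = 3.0, 2.0, 0.4, 100_000

sums = [poisson(lam, rng) + poisson(mu, rng) for _ in range(n)]
thinned = [binomial_thin(poisson(lam, rng), p, rng) for _ in range(n)]

# Empirical means should match Poisson(lam + mu) and Poisson(p * lam)
print(round(sum(sums) / n, 1))     # ≈ 5.0
print(round(sum(thinned) / n, 1))  # ≈ 1.2
```

The same experiment run on a two-parameter (ordinary Hermite) count model would illustrate the k = 2 case.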
Regular Expression Search on Compressed Text
We present an algorithm for searching for regular expression matches in
compressed text. The algorithm reports the number of matching lines in the
uncompressed text in time linear in the size of its compressed version. We
define efficient data structures that yield nearly optimal complexity bounds,
and we provide a sequential implementation, zearch, that requires up to 25%
less time than the state of the art.
Comment: 10 pages, published in the Data Compression Conference (DCC'19).
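The linear-in-compressed-size behavior can be illustrated with a toy sketch (this is not zearch's data structure, and it decides occurrence rather than counting matching lines): precompute, for each symbol of a straight-line program, the DFA state-transition map, and compose the maps bottom-up without ever decompressing the text.

```python
# DFA recognizing "the text contains 'ab'".
# States: 0 = seen nothing, 1 = just saw 'a', 2 = saw 'ab' (absorbing accept).
# Because the accepting state is absorbing, a single state->state map per
# grammar symbol suffices to decide occurrence.
def step(state, ch):
    if state == 2:
        return 2
    if ch == 'a':
        return 1
    if ch == 'b' and state == 1:
        return 2
    return 0

STATES = (0, 1, 2)

def char_map(ch):
    return tuple(step(s, ch) for s in STATES)

def compose(f, g):
    # Apply f first, then g (maps are tuples indexed by state)
    return tuple(g[f[s]] for s in STATES)

# Straight-line program: terminals map to characters, nonterminals to pairs
# of previously defined symbols (listed in bottom-up order).
rules = {
    'A': 'a', 'B': 'b',
    'X1': ('B', 'A'),    # derives "ba"
    'X2': ('X1', 'X1'),  # derives "baba"
    'X3': ('X2', 'X2'),  # derives "babababa"
}

maps = {}
for sym, rhs in rules.items():
    if isinstance(rhs, str):
        maps[sym] = char_map(rhs)
    else:
        left, right = rhs
        maps[sym] = compose(maps[left], maps[right])

# One map composition per rule: time linear in the grammar, not the text.
print(maps['X3'][0])  # 2 (accepting: "babababa" contains "ab")
```

Counting matching lines, as the paper's algorithm does, additionally requires tracking newline boundaries and per-nonterminal match counts, which is where the more refined data structures come in.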
On the Use of Quasiorders in Formal Language Theory
In this thesis we use quasiorders on words to offer a new perspective on two
well-studied problems from Formal Language Theory: deciding language inclusion
and manipulating the finite automata representations of regular languages.
First, we present a generic quasiorder-based framework that, when instantiated
with different quasiorders, yields different algorithms (some of them new) for
deciding language inclusion. We then instantiate this framework to devise an
efficient algorithm for searching with regular expressions on
grammar-compressed text. Finally, we define a framework of quasiorder-based
automata constructions to offer a new perspective on residual automata.
Comment: PhD thesis
Performance Evaluation of cuDNN Convolution Algorithms on NVIDIA Volta GPUs
Convolutional neural networks (CNNs) have recently attracted considerable attention due to their outstanding accuracy in applications such as image recognition and natural language processing. While one advantage of CNNs over other types of neural networks is their reduced computational cost, faster execution is still desired for both training and inference. Since convolution operations account for most of the execution time, multiple algorithms have been and are being developed to accelerate this type of operation. However, due to the wide range of convolution parameter configurations used in CNNs and the possible data type representations, it is not straightforward to assess in advance which of the available algorithms will perform best in each particular case. In this paper, we present a performance evaluation of the convolution algorithms provided by cuDNN, the library used by most deep learning frameworks for their GPU operations. In our analysis, we leverage the convolution parameter configurations from widely used CNNs and discuss which algorithms are better suited depending on the convolution parameters, for both 32-bit and 16-bit floating-point (FP) data representations. Our results show that the filter size and the number of inputs are the most significant parameters when selecting a GPU convolution algorithm for 32-bit FP data. For 16-bit FP, leveraging specialized arithmetic units (NVIDIA Tensor Cores) is key to obtaining the best performance.
This work was supported by the European Union's Horizon 2020 Research and Innovation Program under Marie Sklodowska-Curie Grant 749516, and in part by the Spanish Juan de la Cierva program under Grant IJCI-2017-33511.
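Why filter size and input count dominate can be seen from a back-of-envelope FLOP count for direct convolution — an illustration only; cuDNN's actual algorithms (implicit GEMM, Winograd, FFT) have different constants and trade-offs, and the layer dimensions below are made up for the example.

```python
def direct_conv_flops(n, c, h, w, k, r, s, stride=1, pad=0):
    """FLOP count for a direct convolution.
    n: batch size, c: input channels, h/w: input height/width,
    k: output channels (filters), r/s: filter height/width."""
    h_out = (h + 2 * pad - r) // stride + 1
    w_out = (w + 2 * pad - s) // stride + 1
    # Each output element needs c*r*s multiply-adds (2 FLOPs each)
    return 2 * n * k * h_out * w_out * c * r * s

# Same hypothetical layer with 3x3 vs. 5x5 filters ("same" padding)
f3 = direct_conv_flops(n=32, c=64, h=56, w=56, k=64, r=3, s=3, pad=1)
f5 = direct_conv_flops(n=32, c=64, h=56, w=56, k=64, r=5, s=5, pad=2)
print(round(f5 / f3, 2))  # 2.78: cost grows with the filter area (25/9)
```

The count is linear in the batch size n and quadratic in the filter side, which is consistent with the abstract's finding that these parameters drive algorithm selection for 32-bit FP.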
Hermenéutica del subjuntivo español: la deixis "Introversa"
The objective of this article is to define the forms of the subjunctive mood through variables not linked to the notion of time. The subjunctive is therefore classified as "deixis introversa" (introverted deixis) because, instead of referring to facts or states that the speaker locates in time, it deviates from the temporal plane to place something explicitly in consciousness.
Algunas reflexiones sobre el ICME 10 y el PME 28
The International Congress on Mathematical Education (ICME) takes place every four years, and on each such occasion the meeting of the International Group for the Psychology of Mathematics Education (PME) is held at a nearby location. This year, the Nordic countries were in charge of organizing these two conferences. In this review, we describe the most relevant aspects of the two meetings, comment on our perception of the state of the mathematics education research community, highlight the interest in the socio-cultural aspects of mathematics education, and reflect on Latin American participation in this kind of international meeting.
A Congruence-based Perspective on Automata Minimization Algorithms
In this work we use a framework of finite-state automata constructions based on equivalences over words to provide new insights into the relation between well-known methods for computing the minimal deterministic automaton of a language.
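One of the well-known methods the work relates is classical partition refinement (Moore's construction): repeatedly split states until two states share a block exactly when their residual languages coincide. A minimal Python sketch, with the DFA encoding chosen purely for illustration:

```python
def minimize(states, alphabet, delta, accepting):
    """Moore's partition refinement. delta: dict (state, symbol) -> state.
    Returns a dict mapping each state to its block id in the final partition."""
    # Start from the accepting / non-accepting split
    partition = {s: (s in accepting) for s in states}
    while True:
        # Two states stay together only if they agree on acceptance and,
        # for every symbol, their successors lie in the same block
        signature = {
            s: (partition[s], tuple(partition[delta[(s, a)]] for a in alphabet))
            for s in states
        }
        blocks = {sig: i for i, sig in enumerate(sorted(set(signature.values())))}
        refined = {s: blocks[signature[s]] for s in states}
        if len(set(refined.values())) == len(set(partition.values())):
            return refined  # no block was split: fixpoint reached
        partition = refined

# DFA over {a, b} accepting words with an even number of 'a's, built with a
# redundant duplicate of the even-parity state (states 0 and 2 are equivalent)
states = [0, 1, 2]
delta = {(0, 'a'): 1, (0, 'b'): 2,
         (1, 'a'): 2, (1, 'b'): 1,
         (2, 'a'): 1, (2, 'b'): 0}
part = minimize(states, ['a', 'b'], delta, accepting={0, 2})
print(len(set(part.values())))  # 2: states 0 and 2 collapse into one block
```

Each block of the final partition is a state of the minimal automaton; congruence-based constructions generalize exactly the choice of equivalence that induces these blocks.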
ggplot2: Elegant Graphics for Data Analysis
Abstract not available (book review).